Denumerable Constrained Markov Decision Problems and Finite Approximations

Author

  • Eitan Altman
Abstract

The purpose of this paper is twofold: first, to establish the theory of discounted constrained Markov Decision Processes with countable state and action spaces and a general multi-chain structure; second, to introduce finite approximation methods. We define the occupation measures and obtain properties of the set of all occupation measures achievable under the different admissible policies. We establish the optimality of stationary policies for the constrained control problem and obtain an LP with a countable number of decision variables through which optimal stationary policies are computed. Since for such an LP one cannot expect to find an optimal solution in a finite number of operations, we present two schemes for finite approximation and establish the convergence of optimal values and policies for both the discounted and the expected average cost, with unbounded cost. It sometimes turns out to be easier to solve the problem with an infinite state space than the problem with a finite yet large state space. Based on the optimal policy for the problem with infinite state space, we construct policies which are almost optimal for the problem with truncated state space. This method is applied to obtain an ε-optimal policy for a problem of optimal priority assignment under constraints for a system of K finite queues.
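The reduction described above, from the constrained discounted control problem to a linear program over occupation measures, can be illustrated on a toy finite model. The sketch below is not taken from the paper: the 3-state, 2-action transition data, the costs, the constraint bound, and the normalization convention are all illustrative assumptions, and a finite model is used only because the countable-state LP has infinitely many variables and cannot be written out in full.

```python
# Minimal sketch (assumed toy data): the occupation-measure LP for a
# discounted constrained MDP, solved with scipy's linprog.
import numpy as np
from scipy.optimize import linprog

n_states, n_actions = 3, 2
beta = 0.9                       # discount factor
nu = np.array([1.0, 0.0, 0.0])   # initial distribution

# P[a, x, y]: probability of moving from state x to y under action a (made-up data)
P = np.array([
    [[0.8, 0.2, 0.0],
     [0.1, 0.8, 0.1],
     [0.0, 0.3, 0.7]],
    [[0.5, 0.5, 0.0],
     [0.0, 0.5, 0.5],
     [0.2, 0.0, 0.8]],
])
c = np.array([[1.0, 4.0], [2.0, 1.0], [5.0, 0.5]])  # cost to minimize, c[x, a]
d = np.array([[0.0, 2.0], [1.0, 3.0], [0.0, 1.0]])  # constrained cost, d[x, a]
V = 1.5   # bound on the (normalized) discounted constrained cost

# Decision variable: normalized occupation measure f(x, a),
# f(x, a) = (1 - beta) * sum_t beta^t P(X_t = x, A_t = a), flattened row-major.
n_vars = n_states * n_actions
idx = lambda x, a: x * n_actions + a

# Balance equations:
#   sum_a f(y, a) - beta * sum_{x, a} P(y | x, a) f(x, a) = (1 - beta) nu(y)
A_eq = np.zeros((n_states, n_vars))
for y in range(n_states):
    for x in range(n_states):
        for a in range(n_actions):
            A_eq[y, idx(x, a)] = (1.0 if x == y else 0.0) - beta * P[a, x, y]
b_eq = (1.0 - beta) * nu

# Constraint on the secondary cost:  sum_{x, a} d(x, a) f(x, a) <= V
A_ub = d.reshape(1, -1)
b_ub = np.array([V])

res = linprog(c.reshape(-1), A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * n_vars)
assert res.success, res.message
f = res.x.reshape(n_states, n_actions)

# An optimal stationary (possibly randomized) policy is recovered from f;
# states with zero occupation mass get an arbitrary (uniform) action choice.
mass = f.sum(axis=1, keepdims=True)
policy = np.divide(f, mass, out=np.full_like(f, 1.0 / n_actions), where=mass > 0)
print("occupation measure:\n", f)
print("stationary policy:\n", policy)
```

For a countable state space this LP has countably many variables and balance equations, which is why the paper's two finite approximation schemes, and the truncation-based construction used for the K-queue priority assignment example, are needed to compute near-optimal policies in a finite number of operations.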


Similar Articles

Denumerable State Nonhomogeneous Markov Decision Processes

We consider denumerable state nonhomogeneous Markov decision processes and extend results from both denumerable state homogeneous and finite state nonhomogeneous problems. We show that, under weak ergodicity, accumulation points of finite horizon optima (termed algorithmic optima) are average cost optimal. We also establish the existence of solution horizons. Finally, an algorithm is presented ...



Approximate Linear Programming for Constrained Partially Observable Markov Decision Processes

In many situations, it is desirable to optimize a sequence of decisions by maximizing a primary objective while respecting some constraints with respect to secondary objectives. Such problems can be naturally modeled as constrained partially observable Markov decision processes (CPOMDPs) when the environment is partially observable. In this work, we describe a technique based on approximate lin...


A Convex Analytic Approach to Risk-Aware Markov Decision Processes

Abstract. In classical Markov decision process (MDP) theory, we search for a policy that, say, minimizes the expected infinite-horizon discounted cost. Expectation is, of course, a risk-neutral measure, which does not suffice in many applications, particularly in finance. We replace the expectation with a general risk functional, and call such models risk-aware MDP models. We consider minimization ...


On the Asymptotic Optimality of Finite Approximations to Markov Decision Processes with Borel Spaces

Abstract. Calculating optimal policies is known to be computationally difficult for Markov decision processes with Borel state and action spaces and for partially observed Markov decision processes even with finite state and action spaces. This paper studies finite-state approximations of discrete time Markov decision processes with Borel state and action spaces, for both discounted and average...




Publication date: 1992